11 research outputs found

    Multi-level Decomposition of Probabilistic Relations

    Get PDF
    Two methods for the decomposition of probabilistic relations are presented in this paper. They consist of splitting relations (blocks) into pairs of smaller blocks related to each other by new variables, generated so as to minimize a cost function that depends on the size and structure of the result. The decomposition is repeated iteratively until a stopping criterion is met. The topology and contents of the resulting structure develop dynamically during the decomposition process and reflect relationships hidden in the data.
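    The iterative splitting described above can be sketched in Python. This is a hypothetical illustration, not the paper's algorithm: the relation is a set of tuples, the cost function is a toy rows-times-columns measure, and the stopping criterion is a simple block-size bound.

```python
def cost(block):
    # Toy cost: rows x columns of the block's table (an assumption,
    # standing in for the paper's size/structure-dependent cost).
    return len(block) * len(next(iter(block)))

def split(block, col):
    # Partition rows by the value in column `col`; the partition label
    # plays the role of a newly generated connecting variable.
    groups = {}
    for row in block:
        groups.setdefault(row[col], set()).add(row)
    return list(groups.values())

def decompose(block, max_rows=2):
    # Stopping criterion: the block is small enough.
    if len(block) <= max_rows:
        return [block]
    ncols = len(next(iter(block)))
    # Choose the split that minimizes the total cost of the resulting blocks.
    best = min((split(block, c) for c in range(ncols)),
               key=lambda parts: sum(cost(p) for p in parts))
    out = []
    for part in best:
        out.extend(decompose(part, max_rows))
    return out

relation = {(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)}
blocks = decompose(relation)
```

    The greedy choice here is purely local; the paper's point is that the topology emerges from repeating such splits, not from a fixed template.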

    Decomposition of Relations: A New Approach to Constructive Induction in Machine Learning and Data Mining -- An Overview

    Get PDF
    This is a review paper that presents work done at Portland State University and associated groups from 1989 to 2001 in the area of functional decomposition of multivalued functions and relations, as well as some applications of these methods.

    An Efficient and Effective Approach to Column-Based Input/Output Encoding in Functional Decomposition

    Get PDF
    Encoding in Curtis-style decompositions is the process of assigning codes to groups of compatible columns (or cubes) so that binary logic descriptions of the predecessor and successor sub-functions can be created for further decomposition. In doing so, the sub-functions created are functionally equivalent to the set of care values specified in the original function. In this paper an input/output encoding algorithm, DC_ENC, is presented that is designed to minimize the total complexity of the predecessor and successor sub-functions, and to increase the total number of don't cares for utilization in subsequent decomposition steps of these sub-functions.
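    A minimal sketch of the column-compatibility grouping and code assignment the abstract describes. The column representation (tuples over {0, 1, None}, with None as a don't care), the greedy grouping, and the helper names are assumptions for illustration, not the DC_ENC algorithm itself.

```python
def compatible(a, b):
    # Two columns are compatible if they never disagree on specified entries.
    return all(x is None or y is None or x == y for x, y in zip(a, b))

def group_columns(cols):
    # Greedy first-fit grouping into compatibility classes.
    groups = []
    for col in cols:
        for g in groups:
            if all(compatible(col, other) for other in g):
                g.append(col)
                break
        else:
            groups.append([col])
    return groups

def encode(groups):
    # Assign each class a code of length ceil(log2(number of classes)).
    width = max(1, (len(groups) - 1).bit_length())
    return {i: format(i, f"0{width}b") for i in range(len(groups))}

cols = [(0, None, 1), (0, 1, 1), (1, 0, None), (None, 0, 0)]
groups = group_columns(cols)
codes = encode(groups)
```

    Merging compatible columns before encoding is what keeps the successor sub-function small; the don't cares left unresolved by a merge remain available to later decomposition steps.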

    Constructive Induction Machines for Data Mining

    Get PDF
    The Learning Hardware approach involves creating a computational network based on feedback from the environment (for instance, positive and negative examples from the trainer), and realizing this network in an array of Field Programmable Gate Arrays (FPGAs). Computational networks can be built by incremental supervised learning (neural-net training) or by global construction (decision-tree design). Here we advocate an approach to Learning Hardware based on Constructive Induction methods of Machine Learning (ML) using multivalued functions. This is contrasted with the Evolvable Hardware (EHW) approach, in which learning/evolution is based on the genetic algorithm alone. Various approaches to supervised inductive learning for Data Mining and Machine Learning applications require fast operations on complex logic expressions and the solution of NP-complete problems such as graph coloring or set covering. They should therefore be realized in hardware to obtain the necessary speed-ups. Using a fast prototyping tool, the DEC-PERLE-1 board based on an array of Xilinx FPGAs, we are developing virtual processors that accelerate the design and optimization of decomposed networks of arbitrary logic blocks.
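    Graph coloring is named above as one of the NP-complete subproblems. A common software baseline, which the hardware approach aims to speed up, is the greedy largest-degree-first heuristic, sketched here purely as an illustration:

```python
def greedy_coloring(adj):
    # adj maps each vertex to the set of its neighbours.
    colors = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):  # largest degree first
        used = {colors[u] for u in adj[v] if u in colors}
        # Assign the smallest color not used by an already-colored neighbour.
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

# A triangle (0-1-2) with a pendant vertex 3 attached to vertex 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
colors = greedy_coloring(adj)
```

    The heuristic is fast but not optimal in general; in decomposition flows it is typically applied to the incompatibility graph of columns, where each color class becomes one merged column.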

    A Survey of Literature on Function Decomposition -- Version IV

    No full text
    This report surveys the literature on decomposition of binary, multiple-valued, and fuzzy functions. It also gives references to relevant basic logic synthesis papers that concern topics important for decomposition, such as the representation of Boolean functions or the symmetry of Boolean functions. As a result of the analysis of the most successful decomposition programs for Ashenhurst-Curtis decomposition, several conclusions are derived that should make it possible to create a new program able to outperform all existing approaches to decomposition. Creating such a superior program is necessary to make it practically useful for applications that are of interest to the Pattern Theory group at Avionics Labs of Wright Laboratories. In addition, the program will also be able to solve problems that have never been formulated before. It will be a test-bed to develop and compare several known and new partial ideas related to decomposition. Our emphasis is on the following topics: 1. representation of data and efficient algorithms for data manipulation, 2. variable ordering methods for variable partitioning to create bound and free sets of input variables; heuristic approaches and their comparison, 3. the column compatibility problem, 4. the subfunction encoding problem, 5. use of partial and total symmetries in data to decrease the decomposition search space, 6. methods of dealing with strongly unspecified functions, which are typical for machine learning applications, 7. special types of decomposition that can be efficiently handled (cascades, trees without variable repetition).
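    The variable-partitioning topic (item 2) centers on the column-multiplicity test of Ashenhurst-Curtis decomposition: for a candidate bound set, count the distinct columns of the decomposition chart, since fewer distinct columns means a cheaper successor sub-function. A minimal sketch, assuming a completely specified Boolean function given as a Python callable:

```python
from itertools import product

def column_multiplicity(f, n, bound):
    # One chart column per assignment to the bound set; each column lists
    # f's values over all assignments to the free set.
    free = [i for i in range(n) if i not in bound]
    cols = set()
    for b in product([0, 1], repeat=len(bound)):
        col = []
        for fr in product([0, 1], repeat=len(free)):
            x = [0] * n
            for i, v in zip(bound, b):
                x[i] = v
            for i, v in zip(free, fr):
                x[i] = v
            col.append(f(tuple(x)))
        cols.add(tuple(col))
    return len(cols)

# XOR of three variables: any two-variable bound set yields multiplicity 2,
# so a single intermediate signal suffices.
f = lambda x: x[0] ^ x[1] ^ x[2]
mu = column_multiplicity(f, 3, bound=[0, 1])
```

    With don't cares (item 6), equality of columns relaxes to compatibility, which is exactly where the column compatibility and encoding problems of items 3 and 4 enter.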

    A survey of literature on function decomposition

    No full text
    This report surveys the literature on decomposition of binary, multiple-valued, and fuzzy functions. It also gives references to relevant basic logic synthesis papers that concern topics important for decomposition, such as the representation of Boolean functions or the symmetry of Boolean functions. As a result of the analysis of the most successful decomposition programs for Ashenhurst-Curtis decomposition, several conclusions are derived that should make it possible to create a new program able to outperform all existing approaches to decomposition. Creating such a superior program is necessary to make it practically useful for applications that are of interest to the Pattern Theory group at Avionics Labs of Wright Laboratories. In addition, the program will also be able to solve problems that have never been formulated before. It will be a test-bed to develop and compare several known and new partial ideas related to decomposition. Our emphasis is on the following topics: 1. representation of data and efficient algorithms for data manipulation, 2. variable ordering methods for variable partitioning to create bound and free sets of input variables; heuristic approaches and their comparison, 3. the column compatibility problem, 4. the subfunction encoding problem, 5. use of partial and total symmetries in data to decrease the decomposition search space, 6. methods of dealing with strongly unspecified functions, which are typical for machine learning applications, 7. special types of decomposition that can be efficiently handled (cascades, trees without variable repetition). We would like to acknowledge Dr. Tim Ross, Mr. Mark Axtell, and Professors Robert Brayton

    Development Search of Strategies for MULTIS

    No full text
    Although every functional decomposer program from the literature uses some strategy for finding bound and free sets of variables (variable partitioning), and/or for selecting among partial decomposition processes or auxiliary decomposition subroutines, nothing has been published comparing the different decomposition strategies. By general search strategies we mean programmed methods, for instance deciding whether to execute a decomposition (and, if so, of which type), or to continue looking for a better bound set. Using our decomposer MULTIS, we found that the general search strategies of the decomposer influence its cost/speed tradeoff more than any other single component, such as the encoding algorithm or the column minimization algorithm. In thi